ECE1508: One Shot Information Theory

 

Course Description:

This course introduces modern techniques for proving coding theorems in information theory that do not rely on the classical assumption of long block lengths. Coding theorems based on importance sampling, rejection sampling, and the Poisson functional representation lemma will be presented for source coding. Connections to deep-learning approaches to data compression and the rate-distortion-perception function will be introduced. Coding theorems for channel coding based on the Poisson matching lemma and the importance matching lemma will be covered. As time permits, connections to differential privacy and to speculative decoding for large language models will be discussed.
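To give a flavor of the sampling-based viewpoint above, the sketch below shows basic rejection sampling: drawing from a target distribution using proposals from a reference distribution. In one-shot channel-simulation schemes of the kind studied in this course, the index of the accepted proposal is (roughly) what the encoder communicates. This is a minimal illustrative sketch, not an algorithm from the listed papers; the function names and the toy distributions are invented for the example.

```python
import random

def rejection_sample(target_pmf, proposal_pmf, sample_proposal, m, rng):
    """Draw one sample from target_pmf via rejection sampling.

    m must satisfy target_pmf(x) <= m * proposal_pmf(x) for all x.
    Returns (sample, k): k is the index of the accepted proposal,
    which is the quantity a one-shot scheme would encode.
    """
    k = 0
    while True:
        k += 1
        x = sample_proposal()
        # Accept x with probability target(x) / (m * proposal(x)).
        if rng.random() < target_pmf(x) / (m * proposal_pmf(x)):
            return x, k

# Toy example: simulate a biased coin (target) from a fair coin (proposal).
target = {0: 0.25, 1: 0.75}
proposal = {0: 0.5, 1: 0.5}
m = 1.5  # max over x of target[x] / proposal[x]

rng = random.Random(0)
samples = [rejection_sample(target.get, proposal.get,
                            lambda: rng.randrange(2), m, rng)[0]
           for _ in range(10000)]
print(sum(samples) / len(samples))  # empirical mean, close to 0.75
```

The expected number of proposals per accepted sample is m, which hints at why the communication cost of such schemes is tied to how well the proposal matches the target.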

 

Lectures: Wednesdays 3:10-5pm (BA4164), starting January 7, 2026

Instructor: Ashish Khisti

 

Lectures will be delivered by the course instructor based on the following papers:

·      Li, Cheuk Ting, and Abbas El Gamal. "Strong functional representation lemma and applications to coding theorems." IEEE Transactions on Information Theory 64.11 (2018): 6967-6978. (link)

·      Li, Cheuk Ting, and Venkat Anantharam. "A unified framework for one-shot achievability via the Poisson matching lemma." IEEE Transactions on Information Theory 67.5 (2021): 2624-2651. (link)

·      Phan, Buu, Ashish Khisti, and Christos Louizos. "Importance matching lemma for lossy compression with side information." International Conference on Artificial Intelligence and Statistics, 2024. (link)

·      Phan, Buu, and Ashish Khisti. "Channel Simulation and Distributed Compression with Ensemble Rejection Sampling." NeurIPS, 2025. (link)

·      Rowan, Joseph, Buu Phan, and Ashish Khisti. "List-Level Distribution Coupling with Applications to Speculative Decoding and Lossy Compression." NeurIPS 2025. (link)

·      Maddison, Chris J. "A Poisson process model for Monte Carlo." Perturbation, Optimization, and Statistics (2016): 193-232. (link)

·      Theis, Lucas, and Noureldin Y. Ahmed. "Algorithms for the communication of samples." International Conference on Machine Learning, 2022. (link)

·      Harsha, P., R. Jain, D. McAllester, and J. Radhakrishnan. "The communication complexity of correlation." Twenty-Second Annual IEEE Conference on Computational Complexity (CCC'07), 2007, pp. 10-23. (link)

·      Liu, Y., W. N. Chen, A. Ozgur, and C. T. Li. "Universal exact compression of differentially private mechanisms." Advances in Neural Information Processing Systems 37 (2024): 91492-91531. (link)

 

Grading: Grading will be based on problem sets (20%), a mid-term exam (20%), a final exam (20%), and a final course project (40%).